Spatial-temporal traffic flow prediction model based on gated convolution
Li XU, Xiangyuan FU, Haoran LI
Journal of Computer Applications    2023, 43 (9): 2760-2765.   DOI: 10.11772/j.issn.1001-9081.2022081146

Existing traffic flow prediction models cannot accurately capture the spatio-temporal features of traffic data, and although most of them perform well in single-step prediction, their multi-step prediction performance is not ideal. To address these problems, a Spatio-Temporal Traffic Flow Prediction Model based on Gated Convolution (GC-STTFPM) was proposed. Firstly, a Graph Convolutional Network (GCN) combined with a Gated Recurrent Unit (GRU) was used to capture the spatio-temporal features of traffic flow data. Then, a method of splicing and filtering the original data and the spatio-temporal feature data with a gated convolution unit was proposed to verify the validity of the spatio-temporal features. Finally, a GRU was used as the decoder to make accurate and reliable predictions of future traffic flow. Experimental results on a Los Angeles highway traffic dataset show that, compared with the Attention based Spatial-Temporal Graph Neural Network (ASTGNN) and the Diffusion Convolutional Recurrent Neural Network (DCRNN) under single-step prediction (5 min), GC-STTFPM reduces the Mean Absolute Error (MAE) by 5.9% and 9.9% respectively, and the Root Mean Square Error (RMSE) by 1.7% and 5.8% respectively. Its prediction accuracy is also better than that of most existing benchmark models at the three multi-step scales of 15, 30 and 60 min, demonstrating strong adaptability and robustness.
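The gated filtering step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the shapes, weights, and function name are hypothetical. A tanh branch produces candidate features from the concatenated original and spatio-temporal data, a sigmoid branch produces gate values in (0, 1), and their elementwise product filters the candidates:

```python
import numpy as np

def gated_fusion(original, spatiotemporal, w_f, w_g):
    """Fuse raw inputs with extracted spatio-temporal features via a gate.

    The concatenated features are projected twice: a tanh branch yields
    candidate features, a sigmoid branch yields gate values in (0, 1),
    and their elementwise product keeps only the gated portion.
    """
    x = np.concatenate([original, spatiotemporal], axis=-1)  # (n, 2d)
    candidate = np.tanh(x @ w_f)                 # (n, d), in (-1, 1)
    gate = 1.0 / (1.0 + np.exp(-(x @ w_g)))      # (n, d), in (0, 1)
    return candidate * gate

rng = np.random.default_rng(0)
n, d = 4, 8  # toy sizes: 4 road sensors, 8-dimensional features
orig = rng.standard_normal((n, d))
st_feat = rng.standard_normal((n, d))
w_f = rng.standard_normal((2 * d, d))
w_g = rng.standard_normal((2 * d, d))
fused = gated_fusion(orig, st_feat, w_f, w_g)
```

Because the tanh branch is bounded by (-1, 1) and the gate by (0, 1), every fused value stays strictly inside (-1, 1) regardless of the input scale.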

Real-time detection method of traffic information based on lightweight YOLOv4
GUO Keyou, LI Xue, YANG Min
Journal of Computer Applications    2023, 43 (1): 74-80.   DOI: 10.11772/j.issn.1001-9081.2021101849
Aiming at the problem of vehicle object detection in daily road scenes, a real-time traffic information detection method based on lightweight YOLOv4 (You Only Look Once version 4) was proposed. Firstly, a multi-scene, multi-period vehicle object dataset was constructed and preprocessed with the K-means++ algorithm. Secondly, a lightweight YOLOv4 detection model was proposed, in which the backbone network was replaced by MobileNet-v3 to reduce the number of model parameters, and depthwise separable convolution was introduced to replace the standard convolution of the original network. Finally, combined with label smoothing and cosine annealing algorithms, the Leaky Rectified Linear Unit (LeakyReLU) activation function was used to replace the original activation function in the shallow layers of MobileNet-v3 to improve the convergence of the model. Experimental results show that the lightweight YOLOv4 has a weight file of 56.4 MB, a detection rate of 85.6 Frames Per Second (FPS), and a detection precision of 93.35%, verifying that the proposed method can serve as a reference for real-time traffic information detection and its applications in real road scenes.
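The parameter savings from replacing a standard convolution with a depthwise separable one can be checked with simple arithmetic. The layer sizes below are invented for illustration only; they are not taken from the paper's network:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution (biases omitted)."""
    return c_in * k * k + c_in * c_out

# Toy layer: 128 input channels, 256 output channels, 3 x 3 kernel.
std = conv_params(128, 256, 3)                 # 294912 parameters
sep = depthwise_separable_params(128, 256, 3)  # 33920 parameters
reduction = std / sep                          # roughly 8.7x fewer
```

For a 3 x 3 kernel the reduction factor approaches 9 as the output channel count grows, which is why this substitution shrinks the weight file so markedly.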
Self-generated deep neural network based 4D trajectory prediction
LI Xujuan, PI Jianyong, HUANG Feixiang, JIA Haipeng
Journal of Computer Applications    2021, 41 (5): 1492-1499.   DOI: 10.11772/j.issn.1001-9081.2020081198
Since 4-Dimensional (4D) trajectory prediction is not real-time and suffers from iterative error, an Automatically generated Conditional Variational Auto-Encoder (AutoCVAE) was proposed. It predicts future trajectories directly in an encoding-decoding manner, and the number of observations and the prediction steps can be chosen flexibly. Guided by preprocessed Automatic Dependent Surveillance-Broadcast (ADS-B) data and with the reduction of prediction error as the objective, the model structure was searched within a predefined search space by Bayesian optimization, with each round's hyperparameter values chosen with reference to previous evaluation results, so that each new model structure moved closer to the target; ultimately, a high-precision 4D trajectory prediction model based on ADS-B data was obtained. In the experiments, the proposed model predicted trajectories quickly and accurately in real time, with the Mean Absolute Error (MAE) of both latitude and longitude less than 0.03 degrees, the altitude MAE under 30 m, the time error at each point not exceeding 10 s, and the prediction delay of each batch of trajectories within 0.2 s.
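The per-dimension MAE metric used to report these results is straightforward to compute. The latitude values below are invented toy numbers, not data from the paper:

```python
def mae(pred, true):
    """Mean Absolute Error over paired scalar sequences."""
    assert len(pred) == len(true)
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Toy ground-truth vs. predicted latitude track (degrees); values invented.
lat_true = [30.10, 30.12, 30.15, 30.19]
lat_pred = [30.11, 30.13, 30.14, 30.17]
lat_mae = mae(lat_pred, lat_true)  # 0.0125 degrees for this toy track
```

In a 4D setting the same function would be applied separately to latitude, longitude, altitude, and time, since the four dimensions have different units and error tolerances.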
Review of event causality extraction based on deep learning
WANG Zhujun, WANG Shi, LI Xueqing, ZHU Junwu
Journal of Computer Applications    2021, 41 (5): 1247-1255.   DOI: 10.11772/j.issn.1001-9081.2020071080
Causality extraction is a kind of relation extraction task in Natural Language Processing (NLP), which mines event pairs with causality from text by constructing event graphs, and plays an important role in applications in finance, security, biology and other fields. Firstly, concepts such as event extraction and causality were introduced, and the evolution of mainstream methods and the common datasets of causality extraction were described. Then, the current mainstream causality extraction models were listed. Based on a detailed analysis of pipeline-based models and joint extraction models, the advantages and disadvantages of the various methods and models were compared. Furthermore, the experimental performance and related experimental data of the models were summarized and analyzed. Finally, the research difficulties and future key research directions of causality extraction were given.
Summarization of natural language generation
LI Xueqing, WANG Shi, WANG Zhujun, ZHU Junwu
Journal of Computer Applications    2021, 41 (5): 1227-1235.   DOI: 10.11772/j.issn.1001-9081.2020071069
Natural Language Generation (NLG) technologies use artificial intelligence and linguistic methods to automatically generate understandable natural language text. NLG reduces the difficulty of communication between humans and computers, is widely used in machine news writing, chatbots and other fields, and has become one of the research hotspots of artificial intelligence. Firstly, the current mainstream NLG methods and models were listed, and their advantages and disadvantages were compared in detail. Then, for the three NLG technologies of text-to-text, data-to-text and image-to-text, the application fields, existing problems and current research progress were summarized and analyzed respectively. Furthermore, the common evaluation methods for these generation technologies and their scopes of application were described. Finally, the development trends and research difficulties of NLG technologies were given.
Robust texture representation by combining differential feature and Haar wavelet decomposition
LIU Wanghua, LIU Guangshuai, CHEN Xiaowen, LI Xurui
Journal of Computer Applications    2020, 40 (9): 2728-2736.   DOI: 10.11772/j.issn.1001-9081.2020010032
Aiming at the problem that traditional local binary pattern operators lack deep-level correlation information between pixels and are poorly robust to the blurring and rotation changes common in images, a robust texture description operator combining differential features and Haar wavelet decomposition was proposed. In the differential feature channel, the first-order and second-order differential features of the image were extracted by isotropic differential operators, so that the differential features were essentially invariant to rotation and robust to image blur. In the wavelet decomposition feature extraction channel, exploiting the good simultaneous localization of the wavelet transform in the time and frequency domains, multi-scale two-dimensional Haar wavelet decomposition was used to extract blur-robust features. Finally, the feature histograms of the two channels were concatenated to construct the texture description of the image. In the feature discrimination experiments, the accuracy of the proposed operator on the complex UMD, UIUC and KTH-TIPS texture databases reaches 98.86%, 98.2% and 99.05% respectively, exceeding that of the MRELBP (Median Robust Extended Local Binary Pattern) operator by 0.26%, 1.32% and 1.12% respectively. In the robustness experiments on rotation change and image blurring, the classification accuracy of the proposed operator on the TC10 texture database with only rotation changes reaches 99.87%, and its accuracy on the TC11 texture database with different levels of Gaussian blur drops by only 6%. In the computational complexity experiments, the feature dimension of the proposed operator is only 324, and its average feature extraction time on the TC10 database is 30.9 ms. Experimental results show that the method combining differential features and Haar wavelet decomposition has strong feature discriminability, strong robustness to rotation and blurring, and low computational complexity, and has good applicability to small databases.
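A single level of the 2D Haar decomposition used in the wavelet channel can be sketched directly with array slicing. This is a generic textbook formulation (averaging normalization), not the paper's exact code:

```python
import numpy as np

def haar2d_level(img):
    """One level of 2D Haar decomposition on an array with even sides.

    Returns the approximation (LL) and the horizontal, vertical and
    diagonal detail subbands (LH, HL, HH), each half-size.
    """
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4.0   # low-pass approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d_level(img)
```

Applying the function again to `ll` gives the next coarser scale, which is how the multi-scale feature extraction proceeds.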
Point cloud compression method combining density threshold and triangle group approximation
ZHONG Wenbin, SUN Si, LI Xurui, LIU Guangshuai
Journal of Computer Applications    2020, 40 (7): 2059-2068.   DOI: 10.11772/j.issn.1001-9081.2019111909
To address the difficulty of balancing compression precision and compression time when compressing non-uniformly collected point cloud data, a compression method combining a density threshold with triangle group approximation was proposed: triangle groups were constructed by setting a density threshold on the non-empty voxels obtained by octree division, so as to approximate the point cloud surface. Firstly, the vertices of the triangles were determined according to the distribution of points in each voxel. Secondly, the vertices were sorted to generate the triangles. Finally, the density threshold was introduced to construct rays parallel to the coordinate axes, and subdivision points in regions of different density were generated from the intersections of the triangles and the rays. Using point cloud data of a dragon, horse, skull, radome, dog and PCB, the improved regional center of gravity method, the curvature-based compression method, the improved curvature-grading-based compression method, the K-neighborhood cuboid method and the proposed method were compared. The experimental results show that, under the same voxel size, the feature expression of the proposed method is better than that of the improved regional center of gravity method; at similar compression ratios, the proposed method has a lower time cost than the curvature-based, curvature-grading-based and K-neighborhood cuboid methods; and in terms of compression accuracy, the maximum deviation, standard deviation and surface-area change rate of the model built by the proposed method are all better than those of the other four methods. The experimental results show that the proposed method can effectively compress a point cloud in a short time while retaining its feature information well.
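The role of a density threshold over spatial bins can be sketched with a simplified voxel filter. This uses a flat grid instead of an octree and keeps voxel centroids rather than fitting triangles, so it only illustrates the thresholding idea, not the paper's method:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size, density_threshold):
    """Bin points into cubic voxels; keep the centroid of every voxel
    that contains at least `density_threshold` points. Non-empty voxels
    below the threshold are treated as sparse noise and dropped."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    out = []
    for pts in voxels.values():
        if len(pts) >= density_threshold:
            n = len(pts)
            out.append(tuple(sum(p[i] for p in pts) / n for i in range(3)))
    return out

cloud = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.9, 0.9, 0.9),
         (5.0, 5.0, 5.0)]  # the last point is isolated
kept = voxel_downsample(cloud, voxel_size=1.0, density_threshold=2)
```

The three clustered points collapse into one centroid while the isolated point falls below the threshold and is discarded, which is the trade-off between compression ratio and retained detail that the threshold controls.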
Face recognition combining weighted information entropy with enhanced local binary pattern
DING Lianjing, LIU Guangshuai, LI Xurui, CHEN Xiaowen
Journal of Computer Applications    2019, 39 (8): 2210-2216.   DOI: 10.11772/j.issn.1001-9081.2019010181
Since the recognition rate of faces is excessively low under the influence of illumination, pose, expression, occlusion and noise, a method combining weighted Information Entropy (IEw) with Adaptive-Threshold Ring Local Binary Pattern (ATRLBP), called IEwATR-LBP, was proposed. Firstly, the information entropy was extracted from the sub-blocks of the original face image to obtain the IEw of each sub-block. Secondly, the probability histogram of each face sub-block was obtained by feature extraction with the ATRLBP operator. Finally, the final feature histogram of the original face image was obtained by concatenating the products of each IEw with the corresponding probability histogram, and the recognition result was computed with a Support Vector Machine (SVM). In comparison experiments on the illumination, pose, expression and occlusion subsets of the AR face database, the proposed method achieved recognition rates of 98.37%, 94.17%, 98.20% and 99.34% respectively; meanwhile, it achieved a maximum recognition rate of 99.85% on the ORL face database. Comparing the average recognition rates over 5 experiments with different training samples shows that the recognition rate on samples with Gaussian noise was 14.04 percentage points lower than that on noise-free samples, while the recognition rate on samples with salt-and-pepper noise was only 2.95 percentage points lower. Experimental results show that the proposed method can effectively improve the recognition rate of faces under the influence of illumination, pose, occlusion, expression and impulse noise.
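The entropy-based block weighting can be sketched in a few lines. The normalization of entropies into weights is an assumption for illustration; the paper's exact weighting formula may differ:

```python
import math
from collections import Counter

def block_entropy(block):
    """Shannon entropy (bits) of the gray-level histogram of a block."""
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_weights(blocks):
    """Normalize block entropies so the weights sum to 1 (hypothetical
    normalization; flat blocks contribute nothing)."""
    ents = [block_entropy(b) for b in blocks]
    s = sum(ents)
    return [e / s for e in ents] if s > 0 else [1.0 / len(blocks)] * len(blocks)

flat_block = [10] * 16            # uniform gray levels: zero entropy
textured_block = list(range(16))  # 16 distinct levels: maximal entropy
weights = entropy_weights([flat_block, textured_block])
```

Textured regions such as eyes and mouth get high weights, while flat regions such as cheeks get low ones, which is the intended effect of weighting the LBP histograms by entropy.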
Temporal evidence fusion method with consideration of time sequence preference of decision maker
LI Xufeng, SONG Yafei, LI Xiaonan
Journal of Computer Applications    2019, 39 (6): 1626-1631.   DOI: 10.11772/j.issn.1001-9081.2018102218
Aiming at the problem of temporal uncertain information fusion, and to fully reflect its dynamic characteristics and the influence of the time factor, a temporal evidence fusion method based on evidence theory that considers the decision maker's preference for time sequence was proposed. Firstly, the decision maker's time sequence preference was incorporated into temporal evidence fusion: through an analysis of the characteristics of the temporal evidence sequence, the preference was measured based on the definition of a temporal memory factor. Then, the evidence sources were revised by the time sequence weight vector, obtained by constructing an optimization model, together with the idea of evidence credibility. Finally, the revised evidences were fused by the Dempster combination rule. Numerical examples show that, compared with other fusion methods that do not consider the time factor, the proposed method can deal with conflicting information in a temporal information sequence effectively and obtain a reasonable fusion result; moreover, by taking into account both the credibility of the temporal evidence sequence and the subjective preference of the decision maker, it can reflect the influence of the decision maker's subjective factors on temporal evidence fusion and gives good expression to its dynamic characteristics.
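The Dempster combination rule used in the final fusion step is standard and can be sketched for two mass functions over a toy frame of discernment (the masses below are invented, not from the paper's numerical examples):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for mass functions keyed by frozenset focal elements.

    Intersects every pair of focal elements, accumulates the product of
    their masses, and renormalizes by 1 - K, where K is the conflicting
    mass that fell on the empty intersection.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, B: 0.4}  # evidence at time t1 (toy values)
m2 = {A: 0.7, B: 0.3}  # evidence at time t2 (toy values)
fused = dempster_combine(m1, m2)
```

In the proposed method the inputs to this rule are not the raw evidences but evidences already revised by the time sequence weight vector, which is what tempers conflict between early and late observations.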
Contextual authentication method based on device fingerprint of Internet of Things
DU Junxiong, CHEN Wei, LI Xueyan
Journal of Computer Applications    2019, 39 (2): 464-469.   DOI: 10.11772/j.issn.1001-9081.2018081955
Aiming at the security problem of remote control caused by illegal device access in the Internet of Things (IoT), a contextual authentication method based on device fingerprints was proposed. Firstly, the fingerprint of an IoT device was extracted from its interaction traffic by a proposed single-byte analysis method. Secondly, a process framework for authentication was proposed, in which identity authentication is performed according to six contextual factors including the device fingerprint. Finally, in experiments on IoT devices, the relevant device fingerprint features were extracted and combined with a decision tree classification algorithm to verify the feasibility of the contextual authentication method. Experimental results show that the classification accuracy of the proposed method is 90%, and the 10% of false negatives are special cases that still meet the authentication requirements. The results show that contextual authentication based on IoT device fingerprints can ensure that only trusted IoT terminal devices access the network.
Attribute reduction of relative indiscernibility relation and discernibility relation in relation decision system
LI Xu, RONG Zijing, RUAN Xiaoxi
Journal of Computer Applications    2019, 39 (10): 2852-2858.   DOI: 10.11772/j.issn.1001-9081.2019030438
Reduction algorithms for the relative indiscernibility relation and the relative discernibility relation were proposed. Firstly, considering the reduction of the relative indiscernibility relation under an equivalence relation, the corresponding discernibility matrix was defined and a reduction algorithm based on this matrix was proposed; a reduction algorithm for the relative discernibility relation was then derived from the complementary relationship between the two relations. Secondly, concepts such as the relative indiscernibility relation were extended to general relations. The corresponding discernibility matrix was proposed for relative indiscernibility relation reduction in a relation decision system, and the discernibility matrix for relative discernibility relation reduction was obtained by using the complementary relationship between the relations, yielding reduction algorithms for both relations. Finally, the proposed algorithms were verified on selected UCI datasets. Under an equivalence relation, the algorithm for relative EQuivalence INDiscernibility relation reduction based on absolute reduction (EQIND) and the algorithm for relative BInary INDiscernibility relation reduction (BIIND) produce the same results, as do the algorithm for relative EQuivalence DIScernibility relation reduction based on absolute reduction (EQDIS) and the algorithm for relative BInary DIScernibility relation reduction (BIDIS). Meanwhile, BIIND and BIDIS are applicable to incomplete decision tables. The feasibility of the proposed algorithms was verified by the experimental results.
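The discernibility matrix that these algorithms are built on can be sketched for a small decision table. This is the classical rough-set construction under an equivalence relation, using an invented toy table, not the paper's extended definition for general relations:

```python
def discernibility_matrix(objects, attrs, decision):
    """Classical discernibility matrix of a decision table.

    For each pair of objects with different decision values, record the
    set of condition attributes on which they differ; pairs with equal
    decisions are omitted.
    """
    n = len(objects)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decision[i] != decision[j]:
                diff = {a for a in attrs if objects[i][a] != objects[j][a]}
                matrix[(i, j)] = diff
    return matrix

# Toy decision table: two condition attributes and a binary decision.
objs = [{"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 1}]
dec = [0, 1, 1]
M = discernibility_matrix(objs, ["a", "b"], dec)
```

A reduct must intersect every non-empty entry of the matrix; here the entry {"b"} for the pair (0, 1) shows that attribute "b" is indispensable.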
HIC-MedRank:improved drug recommendation algorithm based on heterogeneous information network
ZOU Linlin, LI Xueming, LI Xue, YUAN Hong, LIU Xing
Journal of Computer Applications    2017, 37 (8): 2368-2373.   DOI: 10.11772/j.issn.1001-9081.2017.08.2368
With the rapid growth of the medical literature, it is difficult for physicians to keep their knowledge up to date by reading biomedical publications. The MedRank algorithm can recommend influential medications from the literature by analyzing an information network, based on the assumption that "a good treatment is likely to be found in a good medical article published in a good journal, written by good author(s)", recommending the most effective drugs for patients with each type of disease. But the algorithm still has several problems: 1) the diseases, as the inputs, are not independent; 2) the outputs are not specific drugs; 3) some other factors, such as the publication time of an article, are not considered; 4) there is no definition of "good" for articles, journals and authors. An improved algorithm named HIC-MedRank was proposed, which introduces the H-index of authors, the impact factor of journals and the citation count of articles as criteria for defining good authors, journals and articles, and recommends antihypertensive agents for patients suffering from hypertension with Chronic Kidney Disease (CKD) by considering the publication time, supporting institutions, publication type and other factors of the articles. Experimental results on Medline datasets show that the drug recommendations of HIC-MedRank are more precise than those of MedRank and are better recognized by attending physicians; the consistency rate with the JNC guidelines is up to 80%.
Well-formedness checking algorithm of interface automaton and its realization
LI Xue, ZHU Jiagang
Journal of Computer Applications    2017, 37 (2): 574-580.   DOI: 10.11772/j.issn.1001-9081.2017.02.0574
To address the issue that a non-well-formed component in a component-based system may cause the whole system to work abnormally, an algorithm for checking the well-formedness of a component based on its Interface Automaton (IA) model was proposed, and a prototype tool was developed. Firstly, the reachability graph isomorphic to the given IA was constructed. Secondly, an ordered set containing all transitions of the reachability graph was obtained by depth-first search. Finally, the well-formedness check was completed by checking, according to the ordered set, whether each action belonging to a method of the IA can autonomously reach its return action without exception, under the condition that the external environment satisfies the input hypothesis. As a realization of the proposed algorithm, a prototype tool named T-CWFC (Tool for Component Well-Formedness Checking) was developed on the Eclipse platform; it models the given component, builds its reachability graph, checks its well-formedness and outputs the check result. The validity of the proposed algorithm was verified by running the tool on a set of components.
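The core reachability question (can a method's call action reach its return action?) reduces to graph search. The sketch below uses a generic depth-first search over a toy transition relation; the state and action names are invented, and the real algorithm additionally tracks the input hypothesis and exception cases:

```python
def reachable(transitions, start, target):
    """Depth-first search over a transition relation.

    `transitions` maps a state to a list of (action, next_state) pairs;
    returns True if some path from `start` reaches `target`.
    """
    stack, seen = [start], set()
    while stack:
        s = stack.pop()
        if s == target:
            return True
        if s in seen:
            continue
        seen.add(s)
        for _action, nxt in transitions.get(s, []):
            stack.append(nxt)
    return False

# Toy automaton: the call entering state 1 should reach its return at state 3.
trans = {0: [("call?", 1)], 1: [("work;", 2)], 2: [("return!", 3)]}
ok = reachable(trans, 0, 3)              # return is reachable
bad = reachable({0: [("call?", 1)]}, 0, 3)  # dead end: not well-formed
```

A component whose reachability graph contains a call with no path to its return, as in the second query, would fail the well-formedness check.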
Unsupervised video segmentation by fusing multiple spatio-temporal feature representations
LI Xuejun, ZHANG Kaihua, SONG Huihui
Journal of Computer Applications    2017, 37 (11): 3134-3138.   DOI: 10.11772/j.issn.1001-9081.2017.11.3134
To handle the random movement of the segmented target, rapid background change, and arbitrary variation and deformation of object appearance, a new unsupervised video segmentation algorithm fusing multiple spatio-temporal feature representations was presented. By combining salient features with other features computed from pixels and superpixels, a coarse-to-fine robust feature representation was designed for each frame of a video sequence. Firstly, a set of superpixels was generated to represent foreground and background, improving computational efficiency, and segmentation results were obtained by the graph-cut algorithm. Then, the optical flow method was used to propagate information between adjacent frames, and the appearance of each superpixel was updated with its non-local spatio-temporal features, found by nearest-neighbor search with an efficient K-Dimensional tree (K-D tree) algorithm, so as to improve the robustness of segmentation. After that, for the segmentation results generated at the superpixel level, a pixel-based Gaussian mixture model was constructed to achieve pixel-level refinement. Finally, the salient features of the image, together with the segmentation results of the graph cut and the Gaussian mixture model, were combined by a voting scheme to obtain more accurate segmentation results. The experimental results show that the proposed algorithm is robust and effective, and is superior to most unsupervised video segmentation algorithms and some semi-supervised ones.
Rapid displacement compensation method for liquid impurity detection images
RUAN Feng, ZHANG Hui, LI Xuanlun
Journal of Computer Applications    2016, 36 (12): 3442-3447.   DOI: 10.11772/j.issn.1001-9081.2016.12.3442
When an intelligent inspection machine extracts impurities from infusion liquid, image displacement deviation interferes with the frame difference method and causes misjudgments. To solve this problem, a binary-descriptor block matching method based on Features from Accelerated Segment Test (FAST) was proposed. Firstly, feature points were detected by the accelerated segment test at different image scales, and the best feature point was chosen by non-maximum suppression and entropy difference. Then, an improved template was used to sample around the feature point, forming a new binary descriptor with strong robustness to scale changes, noise and illumination changes; the dimension of the new descriptor was further reduced. Finally, by block matching and thresholding, the two frames were matched quickly and accurately, and the displacement deviation was computed and compensated. The experimental results show that, when processing a 1.92-megapixel image, the overall processing time of the proposed method is within 190 ms, of which descriptor generation accounts for only 96 ms. The matching accuracy of the proposed algorithm is above 99%, successfully suppressing mismatches with large spatial offsets. Its computed deviation error is much smaller than those of the existing Scale Invariant Feature Transform (SIFT) and ORiented Binary robust independent elementary features (ORB) algorithms with high matching precision, and with displacement compensation accurate to the sub-pixel level, the method can rapidly compensate the displacement deviation of the bottle in the image.
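Matching binary descriptors comes down to Hamming distance under a threshold. The sketch below uses 8-bit toy descriptors and a greedy nearest-neighbour strategy for illustration; real descriptors are much longer and the paper's block matching restricts the search spatially:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length bit-string descriptors."""
    return bin(d1 ^ d2).count("1")

def match(descs_a, descs_b, max_dist):
    """Greedy nearest-neighbour matching under a distance threshold.

    For each descriptor in frame A, find the closest descriptor in frame B
    and keep the pair only if the distance does not exceed `max_dist`.
    """
    pairs = []
    for i, da in enumerate(descs_a):
        j, dist = min(((j, hamming(da, db)) for j, db in enumerate(descs_b)),
                      key=lambda t: t[1])
        if dist <= max_dist:
            pairs.append((i, j))
    return pairs

# 8-bit toy descriptors from two consecutive frames (invented values).
frame1 = [0b10110010, 0b01001101]
frame2 = [0b10110011, 0b11110000]
matches = match(frame1, frame2, max_dist=2)
```

Only the first descriptor finds a close enough partner; the threshold is what rejects matches with large spatial or appearance offsets. The displacement between matched point coordinates then gives the compensation vector.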
Review helpfulness based on opinion support of user discussion
LI Xueming, ZHANG Chaoyang, SHE Weijun
Journal of Computer Applications    2016, 36 (10): 2767-2771.   DOI: 10.11772/j.issn.1001-9081.2016.10.2767
Focusing on the issues that training datasets are difficult to construct for supervised review helpfulness prediction models and that unsupervised methods do not take sentiment information into account, an unsupervised model combining semantic and sentiment information was proposed. Firstly, an opinion helpfulness score was calculated based on the opinion support scores of reviews and replies, and then the review helpfulness score was calculated. In addition, a review summarization method combining syntactic analysis with an improved Latent Dirichlet Allocation (LDA) model was proposed to extract opinions for helpfulness prediction, and two kinds of constraints, must-link and cannot-link, were constructed from the results of syntactic analysis to guide topic learning, improving the accuracy of the model while ensuring the recall rate. On the experimental dataset, the F1 value of the proposed model is 70% and the ranking accuracy is nearly 90%, and a case study also shows that the model has good explanatory ability.
Massive terrain data storage based on HBase
LI Zhenju, LI Xuejun, XIE Jianwei, LI Yannan
Journal of Computer Applications    2015, 35 (7): 1849-1853.   DOI: 10.11772/j.issn.1001-9081.2015.07.1849

With the development of remote sensing technology, the types and volume of remote sensing data have increased dramatically over the past decades, which poses a challenge for traditional storage modes. A spatial index combining a quadtree with the Hilbert curve was proposed to solve the low storage efficiency of terrain data in HBase. Firstly, the research status of traditional terrain data storage and HBase-based storage was reviewed. Secondly, the design idea of the combined quadtree and Hilbert spatial index for managing global data was presented. Thirdly, the algorithm for calculating the row and column numbers from the longitude and latitude of terrain data, and the algorithm for calculating the final Hilbert code, were designed. Finally, the physical storage structure for the index was designed. The experimental results illustrate that the data loading speed on a Hadoop cluster is 63.79%-78.45% higher than on a single computer, the query time decreases by 16.13%-39.68% compared with the traditional row-key index, and the query speed is at least 14.71 MB/s, which meets the requirements of terrain data visualization.
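The Hilbert code at the heart of such an index maps a 2D grid cell to a 1D key while preserving spatial locality, which suits HBase's lexicographically sorted row keys. Below is the standard iterative encoding for a 2^k x 2^k grid (a generic textbook algorithm, not necessarily the exact code construction used in the paper):

```python
def xy2d(n, x, y):
    """Hilbert distance of cell (x, y) on an n x n grid (n a power of 2).

    Walks from the most significant bit down, accumulating the quadrant's
    contribution and rotating/reflecting the coordinates to recurse into
    the correct sub-square.
    """
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the curve's orientation is consistent.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# On a 2 x 2 grid the curve visits (0,0), (0,1), (1,1), (1,0) in order.
codes = [xy2d(2, 0, 0), xy2d(2, 0, 1), xy2d(2, 1, 1), xy2d(2, 1, 0)]
```

Because nearby cells tend to get nearby codes, terrain tiles that are adjacent on the ground land in nearby HBase regions, which is what speeds up range queries relative to a plain row-column key.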

Efficient mining algorithm for uncertain data in probabilistic frequent itemsets
LIU Haoran, LIU Fang'ai, LI Xu, WANG Jiwei
Journal of Computer Applications    2015, 35 (6): 1757-1761.   DOI: 10.11772/j.issn.1001-9081.2015.06.1757

When using pattern growth to construct a tree structure, existing algorithms for mining probabilistic frequent itemsets suffer from problems such as generating a large number of tree nodes, occupying large memory space and having low efficiency. To solve these problems, a Progressive Uncertain Frequent Pattern Growth algorithm named PUFP-Growth was proposed. By reading the uncertain database tuple by tuple, the proposed algorithm constructs a tree structure as compact as the Frequent Pattern Tree (FP-Tree) and dynamically updates an array of expected values whose header table saves identical itemsets. After all transactions have been inserted into the Progressive Uncertain Frequent Pattern tree (PUFP-Tree), all probabilistic frequent itemsets can be mined by traversing the dynamic array. The experimental results and theoretical analysis show that the PUFP-Growth algorithm can find probabilistic frequent itemsets effectively; compared with the Uncertain Frequent pattern Growth (UF-Growth) and Compressed Uncertain Frequent-Pattern Mine (CUFP-Mine) algorithms, it improves the mining efficiency of probabilistic frequent itemsets on uncertain datasets and reduces memory usage to a certain degree.
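The expected-value bookkeeping that such algorithms maintain can be sketched directly. In an uncertain database each transaction assigns an existence probability to each item; under the usual item-independence assumption, the expected support of an itemset is the sum over transactions of the product of its items' probabilities (toy database below, not from the paper):

```python
def expected_support(database, itemset):
    """Expected support of `itemset` in an uncertain database.

    Each transaction maps an item to its existence probability; assuming
    independence, the itemset's probability within a transaction is the
    product of its items' probabilities, and the expected support is the
    sum of these products over all transactions.
    """
    total = 0.0
    for txn in database:
        p = 1.0
        for item in itemset:
            p *= txn.get(item, 0.0)
        total += p
    return total

db = [
    {"a": 0.9, "b": 0.8},
    {"a": 0.5, "c": 1.0},
    {"b": 0.7},
]
sup_a = expected_support(db, {"a"})        # 0.9 + 0.5
sup_ab = expected_support(db, {"a", "b"})  # only the first transaction contributes
```

An itemset is then deemed frequent when its expected (or, in the probabilistic formulation, its sufficiently probable) support meets the minimum support threshold.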

MapReduce performance model based on multi-phase dividing
LI Zhenju, LI Xuejun, YANG Sheng, LIU Tao
Journal of Computer Applications    2015, 35 (12): 3374-3377.   DOI: 10.11772/j.issn.1001-9081.2015.12.3374
To resolve the low precision and high complexity of existing MapReduce models caused by unreasonable phase partitioning granularity, a multi-phase MapReduce Model (MR-Model) with 5 partition granularities was proposed. Firstly, the research status of MapReduce models was reviewed. Secondly, a MapReduce job was divided into the 5 phases of Read, Map, Shuffle, Reduce and Write, and the processing time of each phase was studied. Finally, the prediction performance of MR-Model was tested by experiments. The experimental results show that MR-Model fits the actual MapReduce job execution process; compared with the two existing models P-Model and H-Model, its time prediction accuracy is improved by 10%-30%, and in the Reduce phase by 2-3 times, so the comprehensive performance of MR-Model is better.
Reference | Related Articles | Metrics
Evolution analysis method of microblog topic-sentiment based on dynamic topic sentiment combining model
LI Chaoxiong, HUANG Faliang, WEN Xiaoqian, LI Xuan, YUAN Chang'an
Journal of Computer Applications    2015, 35 (10): 2905-2910.   DOI: 10.11772/j.issn.1001-9081.2015.10.2905
Abstract458)      PDF (921KB)(452)       Save
For the problem that existing models are unable to analyze the topic-sentiment evolution of microblogs, a Dynamic Topic Sentiment Combining Model (DTSCM) was proposed based on the Topic Sentiment Combining Model (TSCM) and the emotional cycle theory. By capturing the topics and sentiments of microblogs in different periods, DTSCM could track the topic-sentiment evolution trend and obtain the topic-sentiment evolution graph, so as to analyze the evolution of topics and sentiments. The experimental results on a real microblog corpus show that, in contrast with the state-of-the-art models Joint Sentiment/Topic (JST), Sentiment-Latent Dirichlet Allocation (S-LDA) and Dependency Phrases-Latent Dirichlet Allocation (DPLDA), the sentiment classification accuracy of DTSCM increased by 3.01%, 4.33% and 8.75% respectively, and DTSCM could obtain the topic-sentiment evolution of microblogs. The proposed approach not only achieves higher sentiment classification accuracy but can also analyze the topic-sentiment evolution of microblogs, which is helpful for public opinion analysis.
Reference | Related Articles | Metrics
Parallel implementation of OpenVX and 3D rendering on polymorphic graphics processing unit
YAN Youmei, LI Tao, WANG Pengbo, HAN Jungang, LI Xuedan, YAO Jing, QIAO Hong
Journal of Computer Applications    2015, 35 (1): 53-57.   DOI: 10.11772/j.issn.1001-9081.2015.01.0053
Abstract793)      PDF (742KB)(505)       Save

Since image processing, computer vision and 3D rendering are massively parallel workloads, the programmability and flexible parallel processing modes of the Polymorphic Array Architecture for Graphics (PAAG) platform were fully utilized, and a parallelism design method combining operation-level parallelism with data-level parallelism was used to implement the OpenVX kernel functions and 3D rendering pipelines. The experimental results indicate that, in the parallel implementation of OpenVX kernel functions and graphics rendering, Multiple Instruction Multiple Data (MIMD) processing on PAAG achieves a linear speedup with a slope of 1, which is more efficient than the sublinear speedup (slope less than 1) obtained by Single Instruction Multiple Data (SIMD) processing on a traditional Graphics Processing Unit (GPU).

Reference | Related Articles | Metrics
Relative orientation approach based on direct resolving and iterative refinement
YANG Ahua, LI Xuejun, LIU Tao, LI Dongyue
Journal of Computer Applications    2014, 34 (6): 1706-1710.   DOI: 10.11772/j.issn.1001-9081.2014.06.1706
Abstract295)      PDF (723KB)(492)       Save

In order to improve the robustness and accuracy of relative orientation, an approach combining direct resolving and iterative refinement was proposed. Firstly, the essential matrix was estimated from corresponding points. Afterwards, the initial relative position and posture of the two cameras were obtained by decomposing the essential matrix, and the process for determining the unique position and posture parameters was introduced in detail. Finally, by constructing the horizontal epipolar coordinate system, a constraint equation group was built from the corresponding points based on the coplanarity constraint, and the initial position and posture parameters were refined iteratively. The algorithm resists outliers by applying the RANdom SAmple Consensus (RANSAC) strategy and dynamically removing outliers during iterative refinement. The simulation experiments illustrate that the resolving efficiency and accuracy of the proposed algorithm outperform those of the traditional algorithm when various random errors are introduced, and the experiment with real data demonstrates that the algorithm can be effectively applied to relative position and posture estimation in 3D reconstruction.
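The decomposition step can be illustrated with the textbook recovery of rotation and translation candidates from an essential matrix. This is the standard SVD construction, not the paper's full pipeline; the cheirality check that selects the unique solution is only noted in a comment.

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates hidden in an essential matrix.

    Standard SVD construction: with E = U diag(1, 1, 0) Vt, the
    rotation is U W Vt or U W.T Vt (W is a 90-degree rotation about
    the z axis) and the translation direction is the last column of U,
    up to sign.  The single physically valid pair is found by
    triangulating a point and keeping the candidate that places it in
    front of both cameras (cheirality check, omitted here).
    """
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1); E is defined up to sign.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```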

Reference | Related Articles | Metrics
Object-based polarimetric decomposition method for polarimetric synthetic aperture radar images
LI Xuewei, GUO Yiyou, FANG Tao
Journal of Computer Applications    2014, 34 (5): 1473-1476.   DOI: 10.11772/j.issn.1001-9081.2014.05.1473
Abstract303)      PDF (777KB)(273)       Save

Object-oriented analysis of polarimetric Synthetic Aperture Radar (SAR) images is commonly used, while polarimetric decomposition is still pixel-based, which is inefficient for extracting polarimetric information. An object-based method for polarimetric decomposition was therefore proposed. The coherency matrix of an object was constructed by weighted iteration with scattering-similarity coefficients, and the convergence of the coherency matrix was analyzed; polarimetric information could thus be obtained from the coherency matrix of the object instead of each pixel, which improves the efficiency of extracting polarimetric features. To reflect terrain targets more fully, the spatial features of objects were also extracted. After feature selection, polarimetric SAR image classification experiments using Support Vector Machine (SVM) demonstrate the effectiveness of the proposed method.
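The aggregation idea — obtaining one coherency matrix per object instead of per pixel — can be sketched as a weighted mean of pixel coherency matrices. The paper's iterative similarity weighting is not reproduced here, so the weights below are simply taken as given and normalized.

```python
import numpy as np

def object_coherency(T_pixels, weights):
    """Coherency matrix of an image object as the weighted mean of the
    3x3 coherency matrices of its pixels.

    `T_pixels` is an array-like of shape (n, 3, 3); `weights` holds one
    non-negative weight per pixel (in the paper these come from
    scattering-similarity iteration; here they are given directly).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize to a weighted mean
    T = np.asarray(T_pixels, dtype=complex)  # coherency matrices are complex
    return np.tensordot(w, T, axes=1)        # sum_i w_i * T_i
```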

Reference | Related Articles | Metrics
Adaptive non-local denoising of magnetic resonance images based on normalized cross correlation
SHI Li, XU Xiaohui, CHEN Liwei
Journal of Computer Applications    2014, 34 (12): 3609-3613.  
Abstract178)      PDF (792KB)(651)       Save

In order to sufficiently remove the Rician noise in Magnetic Resonance (MR) images, the Normalized Cross Correlation (NCC) of local pixels was proposed to characterize geometric structure similarity, and it was combined with the traditional method that determines similarity weights from pixel intensity only. The improved measure was then applied to the non-local means algorithm and the Non-local Linear Minimum Mean Square Error (NLMMSE) estimation algorithm respectively. To realize adaptive denoising, the weight of the pixel to be filtered or the similarity threshold in the non-local algorithms was computed dynamically according to the local Signal-to-Noise Ratio (SNR). The experimental results show that the proposed algorithm can not only better suppress the Rician noise in MR images but also effectively preserve image details, so it has good application value for further MR image analysis and clinical diagnosis.
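A minimal sketch of the idea follows, assuming a simple multiplicative combination of an NCC structure term with the usual intensity-distance weight; the paper's exact combination rule and its SNR-adaptive parameters are not reproduced.

```python
import numpy as np

def ncc(p, q):
    """Normalized cross correlation of two equally sized patches;
    returns 0 when either patch has no intensity variation."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p * p).sum() * (q * q).sum())
    return float((p * q).sum() / denom) if denom > 0 else 0.0

def nlm_weight(p, q, h):
    """Non-local weight combining the usual intensity distance with an
    NCC structure term; the multiplicative combination here is an
    illustrative choice, not the paper's formula.  `h` plays the role
    of the filtering parameter in non-local means."""
    d2 = ((p - q) ** 2).mean()
    structure = max(ncc(p, q), 0.0)   # keep only positive correlation
    return structure * float(np.exp(-d2 / (h * h)))
```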

Reference | Related Articles | Metrics
Analysis of global convergence of crossover evolutionary algorithm based on state-space model
WANG Dingxiang, LI Maojun, LI Xue, CHENG Li
Journal of Computer Applications    2014, 34 (12): 3424-3427.  
Abstract329)      PDF (611KB)(604)       Save

Evolutionary Algorithm based on State-space model (SEA) is a novel real-coded evolutionary algorithm with good optimization effects in engineering optimization problems. The global convergence of the crossover SEA (SCEA) was studied to promote the theoretical and applied research of SEA, and the conclusion that SCEA is not globally convergent was drawn. A Modified Crossover Evolutionary Algorithm based on State-space Model (SMCEA) was then presented by changing the construction of the state evolution matrix and introducing an elastic search operation, and SMCEA was proved globally convergent by means of a homogeneous finite Markov chain. The experimental results on two test functions show that SMCEA improves substantially in convergence rate, ability to reach the optimal value, and running time, which proves the effectiveness of SMCEA and leads to the conclusion that it outperforms both the Genetic Algorithm (GA) and SCEA.

Reference | Related Articles | Metrics
Query algorithm based on mesh structure in large-scale smart grid
WANG Yan, HAO Xiuping, SONG Baoyan, LI Xuecheng, XING Zengwei
Journal of Computer Applications    2014, 34 (11): 3126-3130.   DOI: 10.11772/j.issn.1001-9081.2014.11.3126
Abstract198)      PDF (841KB)(495)       Save

Currently, the queries of transmission line monitoring systems in smart grids mostly target the whole Wireless Sensor Network (WSN), which cannot satisfy flexible and efficient query requirements over arbitrary areas. The layout and query characteristics of the network were analyzed in detail, and a query algorithm based on mesh structure in large-scale smart grids, named MSQuery, was proposed. The algorithm aggregates the data of query nodes within different grids into one or more logical query trees, and an optimized path for collecting query results is built by the merging strategy of the logical query trees. Experiments were conducted comparing MSQuery with RSA, which uses a routing structure for querying, and SkySensor, which uses a cluster structure for querying. The simulation results show that MSQuery can quickly return the query results in the query window, reduce the communication cost, and save the energy of sensor nodes.

Reference | Related Articles | Metrics
Image denoising algorithm using fractional-order integral with edge compensation
HUANG Guo, CHEN Qingli, XU Li, MEN Tao, PU Yifei
Journal of Computer Applications    2014, 34 (10): 2957-2962.   DOI: 10.11772/j.issn.1001-9081.2014.10.2957
Abstract211)      PDF (1008KB)(379)       Save

To solve the problem that existing image denoising algorithms based on fractional-order integration lose edge and texture information, an image denoising algorithm using fractional-order integration with edge compensation was presented. The fractional-order integral operator has a sharp low-pass characteristic. The Cauchy integral formula was introduced into digital image denoising, and the numerical calculation of the fractional-order integral of an image was achieved by slope approximation. In the iterative denoising process, the algorithm built the denoising mask with a slightly higher fractional integral order while the image Signal-to-Noise Ratio (SNR) was rising, and with a lower order while the SNR was declining; in addition, an edge compensation mechanism could partially restore the image's edge and texture information. By combining these order-selection strategies with edge compensation during iterative denoising, the proposed algorithm, as the experimental results show, can remove noise and obtain higher SNR and better visual effects than the traditional denoising algorithms while appropriately restoring the edge and texture information of the image.
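The paper builds its mask from the Cauchy integral formula with slope approximation; as a stand-in, the sketch below uses the more common Grünwald-Letnikov coefficients for a fractional integral of order v, which also shows why a small order yields only gentle low-pass smoothing.

```python
def gl_integral_mask(v, length=5):
    """1-D fractional-order integral mask of order v (0 < v < 1).

    Uses the Grünwald-Letnikov coefficients w_0 = 1 and
    w_k = w_{k-1} * (k - 1 + v) / k, normalized to sum to 1 so that
    filtering preserves mean intensity.  The coefficients are positive
    and decay monotonically, giving the gentle smoothing that leaves
    edges comparatively intact when v is small.
    """
    w = [1.0]
    for k in range(1, length):
        w.append(w[-1] * (k - 1 + v) / k)
    s = sum(w)
    return [x / s for x in w]

mask = gl_integral_mask(0.3)   # five-tap smoothing mask of order 0.3
```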

Reference | Related Articles | Metrics
Global convergence analysis of evolutionary algorithm based on state-space model
WANG Dingxiang, LI Maojun, LI Xue, CHENG Li
Journal of Computer Applications    2014, 34 (10): 2816-2819.   DOI: 10.11772/j.issn.1001-9081.2014.10.2816
Abstract281)      PDF (635KB)(415)       Save

Evolutionary Algorithm based on State-space model (SEA) is a new real-coded evolutionary algorithm with broad application prospects in engineering optimization problems. The global convergence of SEA was analyzed by means of a homogeneous finite Markov chain to improve the theoretical system of SEA and to promote its applied research in engineering optimization, and it was proved that SEA is not globally convergent. A Modified Elastic Evolutionary Algorithm based on State-space model (MESEA) was presented by limiting the value ranges of the elements in the state evolution matrix of SEA and introducing an elastic search. The analytical results show that the search efficiency of SEA can be enhanced by introducing the elastic search. The conclusion that MESEA is globally convergent is drawn, which provides a theoretical basis for applying the algorithm to engineering optimization problems.

Reference | Related Articles | Metrics
Idle travel optimization of tool path on intensive multi-profile patterns
LI Xun, CHEN Ming
Journal of Computer Applications    2014, 34 (1): 281-285.   DOI: 10.11772/j.issn.1001-9081.2014.01.0281
Abstract569)      PDF (805KB)(421)       Save
In the garment industry, shortening the idle travel of the tool path is important for efficiently cutting patterns from a piece of cloth. As these cutting patterns have complex shapes and are densely distributed, it is not trivial to obtain the optimal cutting tool path with the existing algorithms. Based on the MAX-MIN Ant System (MMAS), a new algorithm was proposed to optimize the idle travel of the tool path. The algorithm consists of four steps: 1) use the standard MMAS algorithm to define the pattern order; 2) seek the node serving as the tool entrance on each pattern; 3) optimize the node sequence with MMAS; 4) repeat steps 2) and 3) to achieve the optimal tool path. The experiments show that the proposed algorithm can effectively generate the optimal tool path; compared with the line-scanning algorithm and the Novel Ant Colony System (NACS) algorithm, the result is improved by 60.15% and 22.44% respectively.
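The MMAS component that distinguishes this approach from plain ant colony optimization is the bounded pheromone update: only the best tour deposits pheromone, and every trail is clamped to a fixed interval. A minimal sketch, with illustrative parameter values and an edge representation chosen for the example:

```python
def mmas_update(pheromone, best_tour, best_length, rho=0.02,
                tau_min=0.01, tau_max=5.0):
    """One MAX-MIN Ant System pheromone update.

    All trails evaporate by a factor (1 - rho); only the edges of the
    best tour found so far are reinforced with 1 / best_length; every
    trail is then clamped to [tau_min, tau_max], the bound that gives
    MMAS its name and keeps the search from stagnating.  Edges are
    stored as frozensets of endpoint pairs; parameter values are
    illustrative.
    """
    deposit = 1.0 / best_length
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)
    n = len(best_tour)
    for i in range(n):
        edge = frozenset((best_tour[i], best_tour[(i + 1) % n]))
        pheromone[edge] = pheromone.get(edge, 0.0) + deposit
    for edge in pheromone:
        pheromone[edge] = min(tau_max, max(tau_min, pheromone[edge]))
    return pheromone
```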
Reference | Related Articles | Metrics
Identification method of spam comments in microblog based on AdaBoost
HUANG Ling, LI Xueming
Journal of Computer Applications    2013, 33 (12): 3563-3566.  
Abstract674)      PDF (623KB)(425)       Save
In view of the large number of spam comments on microblogs, a new method based on AdaBoost was proposed to identify them. The method first extracted feature vectors consisting of eight feature values to represent the comments, then trained several weak classifiers that were better than random prediction on these features via the AdaBoost algorithm, and finally combined these weighted weak classifiers into a strong classifier with high precision. The experimental results on comment data sets extracted from popular Sina microblogs indicate that the selected eight features are effective for the method, and it achieves a high recognition rate in identifying spam comments on microblogs.
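As a sketch of the training scheme the abstract describes — weak classifiers combined into a weighted vote — here is a minimal AdaBoost with one-dimensional threshold stumps. The eight comment features themselves are not reproduced, so the feature matrix is generic.

```python
import numpy as np

def adaboost_train(X, y, rounds=10):
    """Minimal AdaBoost with threshold stumps.

    X: (n, d) feature matrix (e.g. eight features per comment);
    y: labels in {-1, +1}.  Each round fits the stump with the lowest
    weighted error, computes its vote weight alpha, and reweights the
    samples so misclassified ones count more in the next round.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = min(max(err, 1e-12), 1.0 - 1e-12)   # avoid log of 0
        alpha = 0.5 * np.log((1.0 - err) / err)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                for alpha, j, thr, sign in ensemble)
    return np.where(score >= 0, 1, -1)
```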
Related Articles | Metrics